

Search for: All records

Creators/Authors contains: "Weimer, James"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Free, publicly-accessible full text available May 9, 2024
  2. Classification of clinical alarms is at the heart of prioritization, suppression, integration, postponement, and other methods of mitigating alarm fatigue. Since these methods directly affect clinical care, alarm classifiers, such as intelligent suppression systems, need to be evaluated in terms of their sensitivity and specificity, which is typically calculated on a labeled dataset of alarms. Unfortunately, the collection and particularly labeling of such datasets requires substantial effort and time, thus deterring hospitals from investigating mitigations of alarm fatigue. This article develops a lightweight method for evaluating alarm classifiers without perfect alarm labels. The method relies on probabilistic labels obtained from data programming—a labeling paradigm based on combining noisy and cheap-to-obtain labeling heuristics. Based on these labels, the method produces confidence bounds for the sensitivity/specificity values from a hypothetical evaluation with manual labeling. Our experiments on five alarm datasets collected at Children’s Hospital of Philadelphia show that the proposed method provides accurate bounds on the classifier’s sensitivity/specificity, appropriately reflecting the uncertainty from noisy labeling and limited sample sizes. 
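The evaluation idea in the abstract above can be sketched in a few lines: noisy heuristics vote on each alarm, the votes are combined into probabilistic labels, and sensitivity is then estimated against those labels together with a confidence interval. The sketch below is an illustrative simplification, not the authors' method: the naive-Bayes vote combiner, the assumed-known heuristic accuracies, and the normal-approximation interval are all stand-ins for the paper's data-programming machinery.

```python
import math

def probabilistic_labels(heuristic_votes, accuracies):
    """Combine noisy heuristic votes (+1, -1, or 0 = abstain) into
    P(y = +1) via a naive-Bayes weighted vote, assuming each
    heuristic's accuracy is known (a simplifying assumption)."""
    probs = []
    for votes in heuristic_votes:
        log_odds = 0.0
        for v, acc in zip(votes, accuracies):
            if v != 0:  # abstaining heuristics contribute nothing
                log_odds += v * math.log(acc / (1 - acc))
        probs.append(1 / (1 + math.exp(-log_odds)))
    return probs

def sensitivity_bounds(pred_pos, p_labels, z=1.96):
    """Estimate a classifier's sensitivity against probabilistic labels,
    with a normal-approximation confidence interval reflecting the
    limited (effective) number of positive examples."""
    tp = sum(p for pred, p in zip(pred_pos, p_labels) if pred)
    pos = sum(p_labels)           # expected number of true positives
    sens = tp / pos
    se = math.sqrt(sens * (1 - sens) / pos)
    return max(0.0, sens - z * se), min(1.0, sens + z * se)
```

With three heuristics of accuracy 0.8, 0.7, and 0.9, unanimous positive votes push the label probability well above 0.5 and unanimous negative votes push it well below, and the resulting interval widens as the effective sample size shrinks.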
  3. Background Early diagnosis is essential for effective stroke therapy. Strokes in hospitalized patients are associated with worse outcomes compared with strokes in the community. We derived and validated an algorithm to identify strokes by monitoring upper limb movements in hospitalized patients. Methods and Results A prospective case–control study in hospitalized patients evaluated bilateral arm accelerometry from patients with acute stroke with lateralized weakness and controls without stroke. We derived a stroke classifier algorithm from 123 controls and 77 acute stroke cases and then validated the performance in a separate cohort of 167 controls and 33 acute strokes, measuring false alarm rates in nonstroke controls and time to detection in stroke cases. Faster detection time was associated with more false alarms. With a median false alarm rate among nonstroke controls of 3.6 (interquartile range [IQR], 2.1–5.0) alarms per patient per day, the median time to detection was 15.0 (IQR, 8.0–73.5) minutes. A median false alarm rate of 1.1 (IQR, 0–2.2) per patient per day was associated with a median time to stroke detection of 29.0 (IQR, 11.0–58.0) minutes. There were no differences in algorithm performance for subgroups dichotomized by age, sex, race, handedness, nondominant hemisphere involvement, intensive care unit versus ward, or daytime versus nighttime. Conclusions Arm movement data can be used to detect asymmetry indicative of stroke in hospitalized patients with a low false alarm rate. Additional studies are needed to demonstrate clinical usefulness.
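The abstract does not disclose the classifier, but its core signal, a sustained asymmetry in movement intensity between the two arms, can be illustrated with a hypothetical sketch. The asymmetry index, threshold, and persistence parameter below are illustrative choices, not the study's algorithm; note how the persistence parameter embodies the reported trade-off between detection time and false alarm rate.

```python
import statistics

def asymmetry_index(left_mag, right_mag):
    """Normalized activity asymmetry between arms in one time window:
    0 means symmetric use, +/-1 means one arm is fully inactive."""
    l = statistics.pstdev(left_mag)   # movement intensity per arm,
    r = statistics.pstdev(right_mag)  # proxied by acceleration spread
    if l + r == 0:
        return 0.0
    return (l - r) / (l + r)

def detect_lateralized_weakness(windows, threshold=0.6, persistence=3):
    """Alarm once |asymmetry| exceeds a threshold for several consecutive
    windows. Raising `persistence` lowers the false alarm rate at the
    cost of a longer time to detection; returns the triggering window
    index, or None if no alarm fires."""
    streak = 0
    for i, (left, right) in enumerate(windows):
        if abs(asymmetry_index(left, right)) >= threshold:
            streak += 1
            if streak >= persistence:
                return i
        else:
            streak = 0
    return None
```

On synthetic data, windows where one arm moves and the other stays flat trigger an alarm after `persistence` windows, while symmetric movement never does.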
  4. False alarms generated by physiological monitors can overwhelm clinical caretakers with a variety of alarms. The resulting alarm fatigue can be mitigated with alarm suppression. Before being deployed, such suppression mechanisms need to be evaluated through a costly observational study, which would determine and label the truly suppressible alarms. This paper proposes a lightweight method for evaluating alarm suppression without access to the true alarm labels. The method is based on the data programming paradigm, which combines noisy and cheap-to-obtain labeling heuristics into probabilistic labels. Based on these labels, the method estimates the sensitivity/specificity of a suppression mechanism and describes the likely outcomes of an observational study in the form of confidence bounds. We evaluate the proposed method in a case study of low SpO2 alarms using a dataset collected at Children's Hospital of Philadelphia and show that our method provides tight and accurate bounds that significantly outperform the naive comparative method. 
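One simple way to describe "the likely outcomes of an observational study" from probabilistic labels is Monte Carlo: repeatedly sample hard labels from the probabilistic ones, compute the metric on each draw, and report percentile bounds. The function below is a hypothetical sketch along those lines, not the paper's estimator; specificity here means the fraction of truly non-suppressible (real) alarms that the mechanism correctly lets through.

```python
import random

def specificity_bounds(suppressed, p_suppressible,
                       n_draws=2000, alpha=0.05, seed=0):
    """Percentile confidence bounds on the specificity a manual-labeling
    study would likely report, obtained by sampling hard labels from
    the probabilistic ones."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        tn = fp = 0
        for s, p in zip(suppressed, p_suppressible):
            truly_suppressible = rng.random() < p
            if not truly_suppressible:      # a real alarm
                if not s:
                    tn += 1                 # correctly passed through
                else:
                    fp += 1                 # wrongly suppressed
        if tn + fp:                         # skip draws with no negatives
            draws.append(tn / (tn + fp))
    draws.sort()
    lo = draws[int(alpha / 2 * len(draws))]
    hi = draws[min(len(draws) - 1, int((1 - alpha / 2) * len(draws)))]
    return lo, hi
```

For a mechanism that suppresses exactly the alarms with high suppressibility probability, the resulting interval sits near 1, and it widens as the labels become less certain or the dataset smaller.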
  5.
    Providing reliable model uncertainty estimates is imperative to enabling robust decision making by autonomous agents and humans alike. While recently there have been significant advances in confidence calibration for trained models, examples with poor calibration persist in most calibrated models. Consequently, multiple techniques have been proposed that leverage label-invariant transformations of the input (i.e., an input manifold) to improve worst-case confidence calibration. However, manifold-based confidence calibration techniques generally do not scale and/or require expensive retraining when applied to models with large input spaces (e.g., ImageNet). In this paper, we present the recursive lossy label-invariant calibration (ReCal) technique that leverages label-invariant transformations of the input that induce a loss of discriminatory information to recursively group (and calibrate) inputs – without requiring model retraining. We show that ReCal outperforms other calibration methods on multiple datasets, especially, on large-scale datasets such as ImageNet. 
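ReCal's two ingredients, grouping inputs by a lossy label-invariant transform and calibrating each group without retraining the model, can be sketched as follows. The grid-search temperature fit and the integer-bucketing transform in the usage note are illustrative stand-ins; the actual recursive grouping and the transforms used on ImageNet are described in the paper.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; T > 1 softens overconfident outputs."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def group_by_lossy_transform(inputs, transform):
    """Bucket inputs by a label-invariant, information-losing transform
    (e.g., aggressive downscaling): inputs that collapse to the same
    coarse representation share one calibration group."""
    groups = {}
    for i, x in enumerate(inputs):
        groups.setdefault(transform(x), []).append(i)
    return groups

def fit_temperature(logits, labels, grid=None):
    """Pick the temperature minimizing negative log-likelihood on a grid
    (a simple stand-in for gradient-based temperature scaling)."""
    grid = grid or [0.5 + 0.1 * k for k in range(26)]   # 0.5 .. 3.0
    best_T, best_nll = 1.0, float("inf")
    for T in grid:
        nll = -sum(math.log(softmax(z, T)[y] + 1e-12)
                   for z, y in zip(logits, labels))
        if nll < best_nll:
            best_nll, best_T = nll, T
    return best_T
```

For example, `group_by_lossy_transform([1, 5, 12, 19, 23], lambda x: x // 10)` yields three groups, and fitting a temperature on overconfident, partly mislabeled logits selects T > 1, softening the group's confidences.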
  7. This report presents the results of a friendly competition for formal verification of continuous and hybrid systems with artificial intelligence (AI) components. Specifically, we consider machine learning (ML) components in cyber-physical systems (CPS), such as feedforward neural networks used as feedback controllers in closed-loop systems, a class of systems classically known as intelligent control systems or, in more modern and specific terms, neural network control systems (NNCS). We refer to this category more broadly as AI and NNCS (AINNCS). The friendly competition took place as part of the 2021 workshop on Applied Verification for Continuous and Hybrid Systems (ARCH). In the third edition of the AINNCS category at ARCH-COMP, three tools were applied to seven benchmark problems (in alphabetical order): JuliaReach, NNV, and Verisig. JuliaReach is a new participant in this category, Verisig participated previously in 2019, and NNV has participated in all previous competitions. This report is a snapshot of the current landscape of tools and the types of benchmarks for which they are suited. Due to the diversity of problems, the lack of a shared hardware platform, and the early stage of the competition, we do not rank tools by performance; nevertheless, the presented results, combined with the 2020 results, likely provide the most complete assessment of current tools for safety verification of NNCS.
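For readers unfamiliar with the object under verification: a neural network control system is simply a plant in feedback with a learned controller. A minimal hypothetical instance is sketched below, with hand-set tanh weights and a double-integrator plant that are illustrative only and taken from no benchmark in the competition.

```python
import math

def controller(x, v):
    """A tiny one-hidden-unit tanh network u = w2 * tanh(w1x*x + w1v*v);
    the weights are hand-set for illustration."""
    h = math.tanh(-1.2 * x - 0.8 * v)
    return 2.0 * h

def closed_loop_step(x, v, dt=0.05):
    """One explicit-Euler step of a double integrator (x' = v, v' = u)
    driven by the neural network controller."""
    u = controller(x, v)
    return x + dt * v, v + dt * u

# Simulate a single trajectory of the closed loop from x = 0.8, v = 0.
x, v = 0.8, 0.0
for _ in range(200):
    x, v = closed_loop_step(x, v)
```

Verification tools such as those above compute guaranteed bounds on the set of all reachable states of loops like this one, rather than sampling individual trajectories as this simulation does.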
  8. This article addresses the problem of verifying the safety of autonomous systems with neural network (NN) controllers. We focus on NNs with sigmoid/tanh activations and use the fact that the sigmoid/tanh is the solution to a quadratic differential equation. This allows us to convert the NN into an equivalent hybrid system and cast the problem as a hybrid system verification problem, which can be solved by existing tools. Furthermore, we improve the scalability of the proposed method by approximating the sigmoid with a Taylor series with worst-case error bounds. Finally, we provide an evaluation over four benchmarks, including comparisons with alternative approaches based on mixed integer linear programming as well as on star sets. 
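The two facts this method rests on are easy to check numerically: sigmoid solves the quadratic ODE s' = s(1 - s), tanh solves t' = 1 - t^2, and a low-order Taylor polynomial approximates sigmoid with a small worst-case error on a bounded interval. The sampled error estimate below is only a numerical illustration; the paper derives guaranteed worst-case bounds.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def derivative(f, x, h=1e-6):
    """Central finite difference, to check the ODE identities numerically."""
    return (f(x + h) - f(x - h)) / (2 * h)

def sigmoid_taylor(x):
    """Degree-3 Taylor polynomial of sigmoid around 0:
    1/2 + x/4 - x^3/48 (the x^2 term vanishes by symmetry)."""
    return 0.5 + x / 4 - x**3 / 48

def worst_case_error(lo, hi, n=1000):
    """Dense sampling of |sigmoid - Taylor| on [lo, hi]; a numerical
    stand-in for a guaranteed remainder bound."""
    return max(abs(sigmoid(lo + (hi - lo) * k / n)
                   - sigmoid_taylor(lo + (hi - lo) * k / n))
               for k in range(n + 1))
```

On [-1, 1] the sampled approximation error stays below 0.01, which is the kind of worst-case slack the verification procedure must account for when it substitutes the polynomial for the activation.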